How Potent are Evasion Attacks for Poisoning Federated Learning-Based Signal Classifiers?
There has been recent interest in leveraging federated learning (FL) for
radio signal classification tasks. In FL, model parameters are periodically
communicated from participating devices, each training on its own local dataset,
to a central server, which aggregates them into a global model. While FL has
privacy/security advantages due to raw data not leaving the devices, it is
still susceptible to several adversarial attacks. In this work, we reveal the
susceptibility of FL-based signal classifiers to model poisoning attacks, which
compromise the training process despite not observing data transmissions. To
this end, we develop an attack framework in which compromised FL devices
perturb their local datasets using adversarial evasion attacks. As a result,
the training of the global model significantly degrades on in-distribution
signals (i.e., signals received over channels with identical distributions at
each edge device). We compare our work to previously proposed FL attacks and
reveal that as few as one adversarial device operating with a low-powered
perturbation under our attack framework can mount a potent model poisoning
attack against the global classifier. Moreover, we find that as more devices
partake in adversarial poisoning, classification performance degrades
proportionally.
Comment: 6 pages, Accepted to IEEE ICC 202
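The attack framework described above amounts to a standard FL client update whose local batch is first perturbed with an evasion-style attack (e.g., FGSM) before local training. The following is a minimal sketch under that reading, assuming a PyTorch classifier; the function names (poison_local_data, compromised_client_update) and hyperparameters are illustrative, not taken from the paper.

```python
# Minimal sketch (not the authors' implementation): a compromised FL client
# perturbs its local data with an FGSM-style evasion attack before local
# training, so the parameters it reports poison the global aggregation.
import torch
import torch.nn.functional as F

def poison_local_data(model, x, y, epsilon=0.05):
    """Return a low-power, epsilon-bounded evasion perturbation of local samples x."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # FGSM step: move each sample in the direction that increases the loss.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def compromised_client_update(model, x, y, lr=0.01, local_steps=5, epsilon=0.05):
    """Local training on poisoned data; the resulting parameters go to the server."""
    x_poisoned = poison_local_data(model, x, y, epsilon)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(local_steps):
        opt.zero_grad()
        F.cross_entropy(model(x_poisoned), y).backward()
        opt.step()
    return {k: v.detach().clone() for k, v in model.state_dict().items()}

# Hypothetical usage with a small classifier over 128-dim signal features:
# model = torch.nn.Sequential(torch.nn.Linear(128, 64), torch.nn.ReLU(), torch.nn.Linear(64, 8))
# x, y = torch.randn(32, 128), torch.randint(0, 8, (32,))
# local_params = compromised_client_update(model, x, y)
```

The server, unaware of the perturbation, aggregates these parameters alongside benign updates, which is how an evasion attack on local data translates into model poisoning of the global classifier.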
Decentralized Event-Triggered Federated Learning with Heterogeneous Communication Thresholds
A recent emphasis of distributed learning research has been on federated
learning (FL), in which model training is conducted by the data-collecting
devices. Existing research on FL has mostly focused on a star topology learning
architecture with synchronized (time-triggered) model training rounds, where
the local models of the devices are periodically aggregated by a centralized
coordinating node. However, in many settings, such a coordinating node may not
exist, motivating efforts to fully decentralize FL. In this work, we propose a
novel methodology for distributed model aggregations via asynchronous,
event-triggered consensus iterations over the network graph topology. We
consider heterogeneous communication event thresholds at each device that weigh
the change in local model parameters against the available local resources in
deciding the benefit of aggregations at each iteration. Through theoretical
analysis, we demonstrate that our methodology achieves asymptotic convergence
to the globally optimal learning model under standard assumptions in
distributed learning and graph consensus literature, and without restrictive
connectivity requirements on the underlying topology. Subsequent numerical
results demonstrate that our methodology achieves substantial reductions in
communication requirements compared with FL baselines.
Comment: 8 page
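A hedged sketch of the event-triggered rule described above: each device compares the change in its local parameters since its last broadcast against a device-specific, resource-dependent threshold, and only communicates when the change justifies the cost. The triggering condition and threshold form below are assumptions for illustration; the paper's exact rule may differ.

```python
# Minimal sketch (assumptions, not the paper's code): a device decides whether
# to broadcast its model based on a heterogeneous, resource-aware threshold
# that weighs parameter change since the last broadcast against local resources.
import numpy as np

def should_broadcast(theta_now, theta_last_sent, resource_level, base_threshold):
    """Trigger an aggregation event only if the model has changed enough.

    A scarcer resource budget (smaller resource_level in (0, 1]) raises the
    effective threshold, so constrained devices communicate less often.
    """
    change = np.linalg.norm(theta_now - theta_last_sent)
    effective_threshold = base_threshold / resource_level
    return change >= effective_threshold

# Illustrative use: a device with little battery left rarely triggers.
theta_now = np.random.randn(10)
theta_last_sent = theta_now + 0.01 * np.random.randn(10)
print(should_broadcast(theta_now, theta_last_sent, resource_level=0.2, base_threshold=0.1))
```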
Digital Ethics in Federated Learning
The Internet of Things (IoT) continuously generates vast amounts of data,
sparking increasing concern over the protection of data privacy and the
limitation of data misuse. Federated learning (FL) facilitates collaborative
capabilities among multiple parties by sharing machine learning (ML) model
parameters instead of raw user data, and it has recently gained significant
attention for its potential in privacy preservation and learning efficiency
enhancement. In this paper, we highlight the digital ethics concerns that arise
when human-centric devices serve as clients in FL. More specifically,
challenges related to game dynamics, fairness, incentives, and continuity arise in FL
due to differences in perspectives and objectives between clients and the
server. We analyze these challenges and their solutions from the perspectives
of both the client and the server, and through the viewpoints of centralized
and decentralized FL. Finally, we explore the opportunities in FL for
human-centric IoT as directions for future development.
Event-Triggered Decentralized Federated Learning over Resource-Constrained Edge Devices
Federated learning (FL) is a technique for distributed machine learning (ML),
in which edge devices carry out local model training on their individual
datasets. In traditional FL algorithms, trained models at the edge are
periodically sent to a central server for aggregation, utilizing a star
topology as the underlying communication graph. However, assuming access to a
central coordinator is not always practical, e.g., in ad hoc wireless network
settings. In this paper, we develop a novel methodology for fully decentralized
FL, where in addition to local training, devices conduct model aggregation via
cooperative consensus formation with their one-hop neighbors over the
decentralized underlying physical network. We further eliminate the need for a
timing coordinator by introducing asynchronous, event-triggered communications
among the devices. In doing so, to account for the inherent resource
heterogeneity challenges in FL, we define personalized communication triggering
conditions at each device that weigh the change in local model parameters
against the available local resources. We theoretically demonstrate that our
methodology converges to the globally optimal learning model at a sublinear
rate under standard assumptions in the distributed learning and consensus
literature. Our subsequent numerical evaluations
demonstrate that our methodology obtains substantial improvements in
convergence speed and/or communication savings compared with existing
decentralized FL baselines.
Comment: 23 pages. arXiv admin note: text overlap with arXiv:2204.0372
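The aggregation step here can be pictured as a consensus (weighted-averaging) iteration over one-hop neighbors of the network graph. The sketch below is an assumption about the mechanics, not the authors' algorithm; Metropolis-Hastings mixing weights are used only as a common doubly stochastic choice.

```python
# Minimal sketch (illustrative): one consensus step of fully decentralized FL,
# where each device mixes its model with those of its one-hop neighbors using
# doubly stochastic (Metropolis-Hastings) weights.
import numpy as np

def consensus_step(models, neighbors, weights):
    """models: device -> parameter vector; neighbors: device -> list of one-hop neighbors."""
    mixed = {}
    for i, theta_i in models.items():
        # Each device averages its own model with its neighbors' models.
        mixed[i] = weights[i][i] * theta_i + sum(
            weights[i][j] * models[j] for j in neighbors[i]
        )
    return mixed

# Tiny example: 3 devices on a line graph 0 - 1 - 2.
models = {i: np.random.randn(4) for i in range(3)}
neighbors = {0: [1], 1: [0, 2], 2: [1]}
weights = {0: {0: 2/3, 1: 1/3}, 1: {0: 1/3, 1: 1/3, 2: 1/3}, 2: {1: 1/3, 2: 2/3}}
print(consensus_step(models, neighbors, weights))
```

In the event-triggered variant, a device would only take part in such a step when its local, resource-aware triggering condition (as in the previous sketch) is satisfied.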
Submodel Partitioning in Hierarchical Federated Learning: Algorithm Design and Convergence Analysis
Hierarchical federated learning (HFL) has demonstrated promising scalability
advantages over the traditional "star-topology" architecture-based federated
learning (FL). However, HFL still imposes significant computation,
communication, and storage burdens on the edge, especially when training a
large-scale model over resource-constrained Internet of Things (IoT) devices.
In this paper, we propose hierarchical independent submodel training (HIST), a
new FL methodology that aims to address these issues in hierarchical settings.
The key idea behind HIST is a hierarchical version of model partitioning, where
we partition the global model into disjoint submodels in each round, and
distribute them across different cells, so that each cell is responsible for
training only one partition of the full model. This enables each client to save
computation and storage costs while alleviating the communication load throughout
the hierarchy. We characterize the convergence behavior of HIST for non-convex
loss functions under mild assumptions, showing the impact of several attributes
(e.g., number of cells, local and global aggregation frequency) on the
performance-efficiency tradeoff. Finally, through numerical experiments, we
verify that HIST is able to save communication costs by a wide margin while
achieving the same target testing accuracy.
Comment: 14 pages, 4 figure
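The model partitioning behind HIST can be pictured as splitting the global parameter vector into disjoint index blocks each round, one block per cell. The following is a minimal sketch under that assumption; the paper's actual partitioning and hierarchical aggregation rules may be more involved.

```python
# Minimal sketch (an assumption about the mechanics, not the paper's code):
# per round, the global parameters are split into disjoint submodels, one per
# cell, so each cell trains only its assigned partition of the full model.
import numpy as np

def partition_global_model(global_params, num_cells, rng):
    """Randomly split parameter indices into disjoint blocks, one block per cell."""
    idx = rng.permutation(global_params.size)
    blocks = np.array_split(idx, num_cells)
    return {cell: block for cell, block in enumerate(blocks)}

def reassemble(global_params, cell_updates, assignment):
    """Write each cell's trained submodel back into its slice of the global model."""
    new_params = global_params.copy()
    for cell, block in assignment.items():
        new_params[block] = cell_updates[cell]
    return new_params

rng = np.random.default_rng(0)
theta = np.zeros(12)
assignment = partition_global_model(theta, num_cells=3, rng=rng)
# Pretend each cell returns a trained version of its own slice.
cell_updates = {c: np.ones_like(theta[b]) * (c + 1) for c, b in assignment.items()}
print(reassemble(theta, cell_updates, assignment))
```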
Federated Learning with Communication Delay in Edge Networks
Federated learning has received significant attention as a potential solution
for distributing machine learning (ML) model training through edge networks.
This work addresses an important consideration of federated learning at the
network edge: communication delays between the edge nodes and the aggregator. A
technique called FedDelAvg (federated delayed averaging) is developed, which
generalizes the standard federated averaging algorithm to incorporate a
weighting between the current local model and the delayed global model received
at each device during the synchronization step. Through theoretical analysis,
an upper bound is derived on the global model loss achieved by FedDelAvg, which
reveals a strong dependency of learning performance on the values of the
weighting and learning rate. Experimental results on a popular ML task indicate
significant improvements in terms of convergence speed when optimizing the
weighting scheme to account for delays.
Comment: Accepted for publication at IEEE Global Communications Conference
(Globecom 2020)
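The FedDelAvg synchronization step described above is a convex combination of each device's current local model and the delayed global model it receives. A minimal sketch, with gamma as the tunable weighting (variable names are illustrative):

```python
# Minimal sketch of the FedDelAvg idea: at synchronization, each device blends
# its current local model with the (stale) global model it just received.
import numpy as np

def feddelavg_sync(local_model, delayed_global_model, gamma=0.5):
    """Convex combination of the device's current local model and the delayed global model."""
    return gamma * local_model + (1.0 - gamma) * delayed_global_model

# Example: a device keeps 70% of its fresher local model when the global model is stale.
local = np.array([1.0, 2.0, 3.0])
stale_global = np.array([0.5, 1.5, 2.5])
print(feddelavg_sync(local, stale_global, gamma=0.7))
```

Tuning gamma trades the freshness of the local model against the consistency provided by the delayed global model, which is the dependency the derived loss bound highlights.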